The Spectrum of Model Adaptation
AI030 Lesson 5

Imagine a Large Language Model (LLM) as a brilliant but generalist scholar. To transform this generalist into a specialized professional, such as a clinical radiologist or a contract lawyer, we navigate the Spectrum of Model Adaptation. This spectrum defines how we move from zero-shot prompting to deep neural modifications, balancing hardware constraints against the demand for state-of-the-art (SOTA) results.

[Figure: The Adaptation Continuum. Control and stability increase from In-Context Learning, to PEFT / LoRA, to Full Fine-Tuning.]

Key Adaptation Modes

  • In-Context Learning (ICL): The model remains "frozen." It learns to estimate $P(y|x)$ by observing examples within the prompt itself. While fast, it often suffers from high variance and hallucination.
  • Alignment & Stability: To reach production-grade reliability, we must move right on the spectrum. Fine-tuning provides better Alignment with human judgment by explicitly penalizing deviations from ground-truth patterns.
  • The SOTA Objective: Achieving top-tier performance requires navigating trade-offs. Full tuning offers maximum control but risks "catastrophic forgetting," while PEFT (Parameter-Efficient Fine-Tuning) provides a hardware-friendly middle ground.
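The PEFT middle ground can be made concrete with a minimal LoRA-style sketch. The idea is to freeze the pretrained weight matrix and train only a low-rank update. The shapes, rank, and scaling factor below are illustrative assumptions, not values from any particular model:

```python
import numpy as np

# Illustrative sizes (assumptions): hidden dimension d, LoRA rank r << d.
d, r = 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight (never updated)
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, zero-initialized

alpha = 16.0                             # LoRA scaling hyperparameter (assumed value)
delta_W = (alpha / r) * (B @ A)          # rank-r update to the frozen weight
W_adapted = W + delta_W                  # merged weight used at inference time

full_params = W.size                     # parameters touched by full fine-tuning
lora_params = A.size + B.size            # parameters actually trained by LoRA
print(f"Full fine-tuning trains {full_params} parameters")
print(f"LoRA trains {lora_params} parameters "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

Because B starts at zero, the adapter is a no-op before training, so adaptation begins exactly at the pretrained model. This is why PEFT is the hardware-friendly point on the spectrum: only a few percent of the parameters need gradients and optimizer state.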
Real-World Example
Consider a medical assistant. Using ICL, you provide three symptom-to-diagnosis examples in the prompt. Using Fine-tuning, you train the model on 50,000 medical records. The latter results in a model that inherently understands clinical jargon and exhibits far higher Consistency and Stability.
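The ICL half of this comparison amounts to prompt construction. A minimal sketch of the three-example prompt described above, with hypothetical symptom-to-diagnosis pairs and a helper name (`build_icl_prompt`) invented for illustration:

```python
# Hypothetical demonstration pairs for the medical-assistant example.
examples = [
    ("fever, dry cough, loss of smell", "suspected viral respiratory infection"),
    ("crushing chest pain radiating to left arm", "possible myocardial infarction"),
    ("polyuria, polydipsia, unexplained weight loss", "possible diabetes mellitus"),
]

def build_icl_prompt(pairs, query):
    """Concatenate demonstrations so a frozen model can estimate P(y|x) in-context."""
    blocks = [f"Symptoms: {x}\nDiagnosis: {y}" for x, y in pairs]
    blocks.append(f"Symptoms: {query}\nDiagnosis:")  # model completes this line
    return "\n\n".join(blocks)

prompt = build_icl_prompt(examples, "headache, stiff neck, photophobia")
print(prompt)
```

The frozen model sees only this string; nothing in its weights changes, which is exactly why ICL is fast to deploy but more variable than a model fine-tuned on tens of thousands of records.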